self ask with search
MEASURING AND NARROWING THE COMPOSITIONALITY GAP IN LANGUAGE MODELS
We investigate the ability of language models to perform compositional reasoning tasks where the overall solution depends on correctly composing the answers to sub-problems. We measure how often models can correctly answer all sub-problems but not generate the overall solution, a ratio we call the compositionality gap. We evaluate this ratio by asking multi-hop questions with answers that require composing multiple facts unlikely to have been observed together during pretraining. In the GPT-3 family of models, as model size increases we show that the single-hop question answering performance improves faster than the multi-hop performance does, so the compositionality gap does not decrease. This surprising result suggests that while more powerful models memorize and recall more factual knowledge, they show no corresponding improvement in their ability to perform this kind of compositional reasoning.
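As a concrete illustration, the compositionality gap is the fraction of multi-hop questions whose sub-questions the model answers all correctly but whose composed question it gets wrong. A minimal sketch of the computation (the records below are made-up toy data, not results from the paper):

```python
# Toy evaluation records for 2-hop questions (illustrative only):
# "subs_correct" = model answered both sub-questions correctly,
# "composed_correct" = model answered the composed multi-hop question correctly.
records = [
    {"subs_correct": True,  "composed_correct": True},
    {"subs_correct": True,  "composed_correct": False},
    {"subs_correct": True,  "composed_correct": False},
    {"subs_correct": False, "composed_correct": False},
]

# Restrict to questions whose sub-questions were all answered correctly.
subs_ok = [r for r in records if r["subs_correct"]]

# Compositionality gap: of those, the fraction where composition still fails.
gap = sum(not r["composed_correct"] for r in subs_ok) / len(subs_ok)
print(gap)  # 2 of the 3 eligible questions fail composition
```

Note that questions the model cannot decompose at all (sub-questions wrong) are excluded from the denominator, which is what lets the gap stay flat even as single-hop accuracy rises.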
We then demonstrate how elicitive prompting (such as chain of thought) narrows the compositionality gap by reasoning explicitly instead of implicitly. We present a new method, self-ask, that further improves on chain of thought. In our method, the model explicitly asks itself (and then answers) follow-up questions before answering the initial question. We finally show that self-ask’s structured prompting lets us easily plug in a search engine to answer the follow-up questions, which additionally improves accuracy.
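To make the prompt structure concrete, here is a minimal sketch of a one-shot self-ask scaffold in Python. The exact prompt text and the `build_prompt` helper are illustrative assumptions, not the paper's released code; the demonstration question echoes the paper's running example. In the search-augmented variant, generation would be stopped at each "Intermediate answer:" line and the preceding "Follow up:" question sent to a search engine, whose result is inserted in place of a model-generated answer.

```python
# A hypothetical one-shot self-ask scaffold: one worked example teaches the
# model to decompose the question into follow-ups before the final answer.
SELF_ASK_PROMPT = """\
Question: Who was president of the U.S. when superconductivity was discovered?
Are follow up questions needed here: Yes.
Follow up: When was superconductivity discovered?
Intermediate answer: Superconductivity was discovered in 1911.
Follow up: Who was president of the U.S. in 1911?
Intermediate answer: William Howard Taft.
So the final answer is: William Howard Taft.

Question: {question}
Are follow up questions needed here:"""


def build_prompt(question: str) -> str:
    """Fill the self-ask scaffold with a new question for the model to answer."""
    return SELF_ASK_PROMPT.format(question=question)
```

Because every follow-up question appears on its own clearly labeled line, a driver loop can mechanically extract it, query a search engine, and append the result as the "Intermediate answer:", which is what makes the search plug-in easy.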
(DeepL)
We investigate the ability of language models to perform compositional reasoning tasks. We measure how often the model correctly answers all sub-problems but fails to produce the overall solution, and call this ratio the compositionality gap. We evaluate this ratio by asking multi-hop questions whose answers require combining multiple facts that are unlikely to have been observed together during pretraining. In the GPT-3 family of models, as model size increases, single-hop question answering performance improves faster than multi-hop performance, so the compositionality gap does not decrease. This surprising result suggests that while more powerful models memorize and recall more factual knowledge, there is no commensurate improvement in their ability to perform this type of compositional reasoning.
We then show that elicitive prompting, such as chain of thought, narrows the compositionality gap by reasoning explicitly rather than implicitly. We also introduce a new method, self-ask, which improves on chain of thought. In this method, the model explicitly asks itself (and then answers) follow-up questions before answering the initial question. Furthermore, we show that self-ask's structured prompting makes it easy to plug in a search engine to answer the follow-up questions, thereby further improving accuracy. ---
This page is auto-translated from /nishio/self ask with search using DeepL. If you find something interesting but the auto-translated English is not good enough to understand it, feel free to let me know at @nishio_en. I'm very happy to spread my thoughts to non-Japanese readers.